Comments about the article in Nature: Prestigious AI meeting takes steps to improve ethics of research

Following is a discussion of this article in Nature Vol. 589, 7 January 2021, by Davide Castelvecchi.
To read the full text, follow this link: https://www.nature.com/articles/d41586-020-03611-8 In the last paragraph I give my own opinion.

Contents

Introduction
1. Unintended Uses
2. Policing AI
Reflection 1 - Artificial Intelligence - Science - Ethics


Introduction

For the first time, the Neural Information Processing Systems (NeurIPS) meeting, which took place completely online this month, required presenters to submit a statement on the broader impact their research could have on society, including any possible negative effects.
Specifically, this last part is impossible to estimate. We all know that weapons have possible negative effects. In fact they have more negative than positive effects, but they are still manufactured in huge numbers.
The influence of research, in all its aspects, should be distributed equally over the whole world. It is not. That is a negative aspect.
Researchers who work on machine learning are increasingly aware of the challenges posed by harmful uses of the technology, from the creation of falsified videos, or ‘deepfakes’, to mistakes by police who rely on facial-recognition algorithms in deciding who to arrest.
The problem lies not in harmful uses, but with the manufacturers of these systems, who promise too much. In fact the manufacturers should supply detailed test results showing how their systems perform and what they cannot do. The point is that top management should fully cooperate with the researchers to publish all these results.
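To make this concrete, below is a minimal sketch, in Python, of what such a published test report could look like. All names and numbers are invented purely for illustration; real publications of this kind (for example the "model cards" some companies have started to release) contain far more detail.

    from dataclasses import dataclass, field

    # A minimal sketch of what a manufacturer's published test report for a
    # facial-recognition system could contain: measured error rates per
    # demographic group plus explicitly documented limitations.
    # All field names and numbers are invented for illustration.

    @dataclass
    class GroupResult:
        group: str                   # demographic group the metric was measured on
        false_match_rate: float      # fraction of wrong identifications
        false_non_match_rate: float  # fraction of missed identifications
        sample_size: int             # number of test images behind the numbers

    @dataclass
    class TestReport:
        system_name: str
        test_dataset: str
        results: list[GroupResult] = field(default_factory=list)
        known_limitations: list[str] = field(default_factory=list)

        def summary(self) -> str:
            lines = [f"{self.system_name} evaluated on {self.test_dataset}"]
            for r in self.results:
                lines.append(f"  {r.group}: false matches {r.false_match_rate:.3%}, "
                             f"missed matches {r.false_non_match_rate:.3%} "
                             f"(n={r.sample_size})")
            lines += [f"  Limitation: {note}" for note in self.known_limitations]
            return "\n".join(lines)

    # Example use, with invented numbers:
    report = TestReport(
        system_name="ExampleFace v1",
        test_dataset="internal benchmark 2020",
        results=[GroupResult("group A", 0.001, 0.02, 50000),
                 GroupResult("group B", 0.004, 0.05, 12000)],
        known_limitations=["Not validated for low-light surveillance footage.",
                           "Error rates differ between demographic groups."],
    )
    print(report.summary())

The essential part is the last field: a report that only lists success rates, without known limitations, does not tell the buyer what the system cannot do.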
The ethical issues involved in modifying videos can range from good to bad.

1. Unintended Uses

2. Policing AI


Ethical thinking should be built into the machine-learning field rather than simply being outsourced to ethics specialists, she said, otherwise, “other disciplines could become the police while programmers try to evade them”.
I don't know exactly how AI systems are developed, but I expect that there is a set of lead engineers who are in charge of how the system is designed and of what the major parts or modules are. It is the programmers who develop the software under the supervision of these lead engineers. Ethical thinking can then be one of these modules, under the supervision of one lead engineer. Of course any module can be outsourced, but then it is very important to clearly specify what the interface of the module is with the rest of the system, in order to get an integrated system; a sketch of this idea follows below.
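As an illustration of this idea, the following sketch (in Python, with purely hypothetical names) shows how an ethics review could be specified as a module behind a clearly defined interface. Whether the module is developed in-house or outsourced, any implementation of the interface will integrate with the rest of the system.

    from abc import ABC, abstractmethod

    # A minimal sketch, with hypothetical names, of "ethical thinking" as one
    # module behind a fixed interface, developed under the supervision of a
    # lead engineer like any other module.

    class EthicsReview(ABC):
        """Interface that every ethics-review module must implement."""

        @abstractmethod
        def assess(self, feature_description: str) -> list[str]:
            """Return the ethical concerns raised by a proposed feature.
            An empty list means no concerns were found."""

    class ChecklistReview(EthicsReview):
        """One possible implementation: a simple keyword checklist."""

        FLAGGED_TOPICS = {
            "facial recognition": "Verify error rates per demographic group.",
            "video synthesis": "Assess the potential for deepfake misuse.",
        }

        def assess(self, feature_description: str) -> list[str]:
            text = feature_description.lower()
            return [note for topic, note in self.FLAGGED_TOPICS.items()
                    if topic in text]

    # Example use:
    review: EthicsReview = ChecklistReview()
    print(review.assess("Add facial recognition to the door camera"))
    # prints: ['Verify error rates per demographic group.']

The interface, not the implementation, is what the other modules depend on; that is exactly the kind of specification the lead engineers would have to agree on before outsourcing.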
Wallach and others, such as Donald Martin, a technical programme manager at Google in San Francisco, California, are working to redesign the product-development process at their companies so that it incorporates awareness of social context.
Both concepts, the product-development process and awareness of social context, require a clear and unambiguous definition.
AI ethics, Martin says, “is not a crisis in the public understanding of science, but a crisis in science’s understanding of the public”.
This sentence does not explain anything.
The revamped review process and the ethics-focused discussions are the latest in a series of efforts by NeurIPS organizers to improve practices in machine learning and AI.
This is a very difficult issue and it is important to clearly identify what you mean.
In 2018, the conference dropped an acronym that many people found offensive, and began a crackdown on sexist behaviour by participants.
The fact that participants show sexist behaviour has, in some sense, nothing to do with machine learning and AI. For more detail see this: nature 27 November 2018 AI conference, sexist behaviour.htm
And last year’s meeting featured robust discussions of AI ethics and inclusivity.
This requires a proper discussion of AI versus ethics.


Reflection 1 - Artificial Intelligence - Science - Ethics

Science in general has a strong relation with ethical issues. At the same time, science can also be divided into good and bad science. In general, we can consider the understanding of what chemistry means good science, because it is the starting point for making medical products, from which we can all benefit. At the same time, by performing certain chemical reactions we can produce products which are dangerous in their use, like gunpowder. This we can call bad science, because only certain humans will benefit and others will not. It is this selection mechanism, what we will select and why, which makes this an ethical issue.
The problem is to what extent computers can be involved in this decision process. That is extremely difficult. To try to solve it at least requires a more detailed description of what is involved.

If you want to give a comment you can use the following form: Comment form


Created: 30 March 2018

Back to my home page Index
Back to Nature comments Nature Index